79 research outputs found

    Mechatronics & the cloud

    Conventionally, the engineering design process has assumed that the design team is able to exercise control over all elements of the design, either directly or, in the case of sub-systems, indirectly through their specifications. The introduction of Cyber-Physical Systems (CPS) and the Internet of Things (IoT) means that a design team can no longer control all elements of a system, particularly as the actual system configuration may be dynamically reconfigured in real time according to user (and vendor) context and need. Additionally, the integration of the Internet of Things with elements of Big Data means that information becomes a commodity to be autonomously traded by and between systems, again according to context and need, all of which has implications for the privacy of system users. The paper therefore considers the relationship between mechatronics and cloud-based technologies in relation to issues such as the distribution of functionality and user privacy.

    Privacy matters: issues within mechatronics

    As mechatronic devices and components become increasingly integrated with and within wider systems concepts such as Cyber-Physical Systems and the Internet of Things, design engineers are faced with new sets of challenges in areas such as privacy. The paper looks at the current, and potential future, state of privacy legislation, regulations and standards and considers how these are likely to impact the way in which mechatronics is perceived and viewed. The emphasis is therefore not on technical issues, though these are brought into consideration where relevant, but on the soft, or human-centred, issues associated with achieving user privacy.

    μ-MAR: Multiplane 3D Marker based Registration for depth-sensing cameras

    Many applications, including object reconstruction, robot guidance, and scene mapping, require the registration of multiple views of a scene to generate a complete geometric and appearance model of it. In real situations, transformations between views are unknown and it is necessary to apply expert inference to estimate them. In the last few years, the emergence of low-cost depth-sensing cameras has strengthened research on this topic, motivating a plethora of new applications. Although these cameras have enough resolution and accuracy for many applications, some situations may not be solved with general state-of-the-art registration methods due to the signal-to-noise ratio (SNR) and the resolution of the data provided. The problem of working with low-SNR data may appear in any 3D system, so novel solutions are needed in this respect. In this paper, we propose a method, μ-MAR, able to both coarsely and finely register sets of 3D points provided by low-cost depth-sensing cameras into a common coordinate system, although it is not restricted to these sensors. The method overcomes the noisy-data problem by using a model-based solution for multiplane registration. Specifically, it iteratively registers 3D markers composed of multiple planes extracted from points of multiple views of the scene. As the markers and the object of interest are static in the scene, the transformations obtained for the markers are applied to the object in order to reconstruct it. Experiments have been performed using synthetic and real data. The synthetic data allows a qualitative and quantitative evaluation by means of visual inspection and the Hausdorff distance, respectively. The real-data experiments show the performance of the proposal using data acquired by a Primesense Carmine RGB-D sensor. The method has been compared to several state-of-the-art methods. The results show that μ-MAR registers objects with high accuracy in the presence of noisy data, outperforming the existing methods. This work has been supported by University of Alicante project GRE11-01 and Valencian Government grant GV/2013/005.
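    The Hausdorff distance used above for the quantitative evaluation against synthetic ground truth can be sketched with a brute-force computation; the point sets and function name below are illustrative, not from the paper:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 3D point sets.

    For each point in one set, find its nearest neighbour in the
    other; the Hausdorff distance is the worst such nearest-neighbour
    distance, taken in both directions. Brute force, O(|A|*|B|).
    """
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# A reconstruction missing one corner point sits at distance 1 from it:
model = [(0, 0, 0), (1, 0, 0)]
truth = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
print(hausdorff(model, truth))  # → 1.0
```

A low Hausdorff distance means every reconstructed point lies close to the ground-truth surface and vice versa, which is why it is a natural fit for evaluating registration accuracy.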

    A Review of Modelling and Simulation Methods for Flashover Prediction in Confined Space Fires

    Confined space fires are common emergencies in our society. Enclosure size, ventilation, and the type and quantity of fuel involved are factors that determine how the fire evolves in these situations. In some cases, favourable conditions may give rise to a flashover phenomenon. However, the difficulty fire services face in handling this complicated emergency can have fatal consequences for their staff. There is therefore a huge demand for new methods and technologies to tackle this life-threatening emergency. Modelling and simulation techniques have been adopted for research due to the complexity of obtaining a database of real cases related to this phenomenon. In this paper, a review of the literature on the modelling and simulation of enclosure fires with respect to the flashover phenomenon is carried out. Furthermore, the related literature on comparing images from thermal cameras with computed images is reviewed. Finally, the suitability of artificial intelligence (AI) techniques for flashover prediction in enclosed spaces is also surveyed. This work has been partially funded by the Spanish Government TIN2017-89069-R grant supported with Feder funds. This work was supported in part by the Spanish Ministry of Science, Innovation and Universities through the Project ECLIPSE-UA under Grant RTI2018-094283-B-C32 and the Lucentia AGI Grant.

    A Novel Prediction Method for Early Recognition of Global Human Behaviour in Image Sequences

    Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition areas have made great efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early using only a small portion of the input, in addition to its ability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of the trajectories of a person in the scene, which allows a high-level understanding of global human behaviour. The representation of the trajectory is used as a descriptor of the activity of the individual. The descriptors are used as input to a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence. However, partial sequences are processed to evaluate the early prediction capabilities for a specific observation time of the scene. The experiments have been carried out using three different datasets from the CAVIAR database, taking into account the behaviour of an individual. Additionally, different classic classifiers have been used in the experiments in order to evaluate the robustness of the proposal. Results confirm the high accuracy of the proposal in the early recognition of human behaviours. This work was supported in part by the University of Alicante, Valencian Government and Spanish Government under grants GRE11-01, GV/2013/005 and DPI2013-40534-R.
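    The abstract does not detail the exact trajectory representation, but the general idea of mapping a (possibly partial) trajectory to a fixed-length descriptor that a standard classifier can consume can be illustrated by resampling; the function and parameter names below are ours, purely for illustration:

```python
def trajectory_descriptor(points, n=8):
    """Resample a 2D trajectory to n roughly equally spaced samples
    and flatten it into a fixed-length feature vector.

    Because any sequence length maps to the same descriptor size,
    the same trained classifier can score partial sequences, which
    is the property that makes early prediction possible.
    """
    if len(points) < 2:
        raise ValueError("need at least two trajectory points")
    idx = [round(i * (len(points) - 1) / (n - 1)) for i in range(n)]
    desc = []
    for i in idx:
        desc.extend(points[i])  # append x, y of the sampled point
    return desc

# A straight-line walk resampled to 3 samples (6-element descriptor):
walk = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]
print(trajectory_descriptor(walk, n=3))  # → [0, 0, 2, 4, 4, 8]
```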

    Adjustable compression method for still JPEG images

    There are a large number of image processing applications that work with different performance requirements and available resources. Recent advances in image compression focus on reducing image size and processing time, but offer no real-time solutions for trading off the time cost and quality of the resulting image, for instance when transmitting the image contents of web pages. In this paper we propose a method for encoding still images, based on the JPEG standard, that allows the compression/decompression time cost and image quality to be adjusted to the needs of each application and to the bandwidth conditions of the network. The real-time control is based on a collection of adjustable parameters relating both to aspects of the implementation and to the hardware on which the algorithm runs. The proposed encoding system is evaluated in terms of compression ratio, processing delay and quality of the compressed image when compared with the standard method.
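    The abstract does not list the paper's adjustable parameters, but a familiar example of a JPEG quality knob is the libjpeg-style scaling of a quantization table, where a single quality value in 1..100 scales the base table up (coarser quantization, smaller files) or down (finer quantization, better quality). The base row below is the first row of the JPEG Annex K luminance table; this sketch shows the standard mechanism, not the paper's method:

```python
def scaled_quant_table(base_table, quality):
    """libjpeg-style quality scaling of a JPEG quantization table.

    quality < 50 scales entries up by 5000/quality percent;
    quality >= 50 scales them down by (200 - 2*quality) percent.
    Entries are clamped to the valid 1..255 range.
    """
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, min(255, (q * scale + 50) // 100)) for q in base_table]

# First row of the Annex K luminance quantization table:
base = [16, 11, 10, 16, 24, 40, 51, 61]
print(scaled_quant_table(base, 50))   # → unchanged: quality 50 is the baseline
print(scaled_quant_table(base, 100))  # → all 1s: near-lossless quantization
```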

    Three-dimensional planar model estimation using multi-constraint knowledge based on k-means and RANSAC

    Plane model extraction from three-dimensional point clouds is a necessary step in many applications such as planar object reconstruction, indoor mapping and indoor localization. Different RANdom SAmple Consensus (RANSAC)-based methods have been proposed for this purpose in recent years. In this study, we propose a novel RANSAC-based method called Multiplane Model Estimation, which can estimate multiple plane models simultaneously from a noisy point cloud, using knowledge extracted from a scene (or an object) in order to reconstruct it accurately. This method comprises two steps: first, it clusters the data into planar faces that preserve constraints defined by knowledge related to the object (e.g., the angles between faces); and second, the plane models are estimated from these data using a novel multi-constraint RANSAC. We performed experiments on the clustering and RANSAC stages, which showed that the proposed method performs better than state-of-the-art methods.
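    For readers unfamiliar with the baseline, a minimal single-plane RANSAC, of the kind the multi-constraint version generalizes, can be sketched as follows. This is a plain-Python illustration with our own helper names and thresholds, not the paper's implementation:

```python
import random

def plane_from_points(p1, p2, p3):
    """Unit-normal plane (n, d) through three points: n.x + d = 0."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],   # cross product of the two edges
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None  # degenerate (collinear) sample
    n = [c / norm for c in n]
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=500, tol=0.05, seed=0):
    """Keep the plane hypothesis with the largest inlier set."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) <= tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers

# A 5x5 grid on the plane z = 0 plus one outlier:
cloud = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
cloud.append((0.2, 0.2, 1.0))
model, inliers = ransac_plane(cloud)
print(len(inliers))  # → 25 (the outlier is rejected)
```

The multi-constraint variant described above differs in that several such models are estimated jointly, with inter-plane knowledge (e.g., expected angles between faces) constraining which hypotheses are acceptable.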

    Visual analysis of fatigue in Industry 4.0

    The performance of manufacturing operations relies heavily on the operators. When operators begin to exhibit signs of fatigue, both their individual performance and the overall performance of the manufacturing plant tend to decline. This research presents a methodology for analyzing fatigue in assembly operations, considering indicators such as the Eye Aspect Ratio (EAR), operator pose, and elapsed operating time. To facilitate the analysis, a dataset of assembly operations was generated and recorded from three different perspectives: frontal, lateral, and top views. The top view enables the analysis of the operator's face and posture to identify hand positions. By labeling the actions in our dataset, we train a deep learning system to recognize the sequence of operator actions required to complete the operation. Additionally, we propose a model for determining the level of fatigue by processing multimodal information acquired from various sources, including eye blink rate, operator pose, and task duration during assembly operations. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. "A way of making Europe" European Regional Development Fund (ERDF) and MCIN/AEI/10.13039/501100011033 supported this work under the MoDeaAS project (grant PID2019-104818RB-I00).
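    The EAR indicator mentioned above is commonly computed from six eye landmarks p1..p6 as EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|), following Soukupová and Čech's blink-detection formulation; the landmark coordinates below are illustrative, not from the paper's dataset:

```python
import math

def eye_aspect_ratio(landmarks):
    """EAR from six eye landmarks (p1, p4 = horizontal corners;
    p2/p6 and p3/p5 = upper/lower lid pairs).

    EAR stays roughly constant while the eye is open and drops
    toward 0 as the eye closes, so a sustained low EAR or a rising
    blink rate serves as one of the fatigue cues described above.
    """
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = 2 * math.dist(p1, p4)
    return vertical / horizontal

# A wide-open eye, 4 units across and 2 units tall:
open_eye = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
print(eye_aspect_ratio(open_eye))  # → 0.5
```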